The mouth, ears, eyes, brain: How we speak and understand each other
Speaking and understanding each other are at the heart of human communication. But how are speech sounds actually produced, and what happens in the brain when we hear? How is vision involved in language comprehension? State-of-the-art technologies help us get to the bottom of these questions.
Our ability to talk to each other is the glue that holds society together. We do this with such ease that we are hardly aware of what it actually takes to speak and listen.
UZH’s Linguistic Technology Platform LiRI presents various methods for the experimental study of language, from instrumental investigation of speech production to neuroimaging techniques for analyzing the complex relationship between hearing and understanding. At Scientifica, you can speak with researchers who use computational and automated methods to analyze language and the speech process. Our researchers will take turns presenting technologies and research projects in three main areas.
Phonetics and Voice Analysis
Saturday: 4 p.m. – 7 p.m., and Sunday: 2 p.m. – 5 p.m.
A voice is more than just the words we speak. It has many characteristics and reveals far more than we generally realize: it locates us geographically, socially and temporally. Our voices also have individual characteristics that forensic phonetics, for example, exploits. You will learn how voices are identified, whether our voices are as unique as fingerprints, and how computer-aided analysis is used in voice recognition and forensic phonetics.
Using an ultrasound system, we will also take a look inside the vocal tract to see how the movements of the tongue produce different speech sounds.
Hearing Research
Saturday: 2 p.m. – 4 p.m., and Sunday: 11 a.m. – 2 p.m.
In order to communicate, we need to be able to hear each other. For people with hearing loss or tinnitus, understanding speech is often difficult even with moderate background noise. They therefore risk losing social contact, as taking part in social activities becomes difficult and often tiring. At the exhibition booth, we will present neural attention training: visitors learn to regulate their brain activity while playing a computer game, thereby training brain patterns that support attention control.
Speech Technology
Saturday: 11 a.m. – 2 p.m.
Language and communication are more than just words; important information is also conveyed visually. Experts from LiRI’s speech technology team will show how automatic image recognition is being developed to analyze visual communication such as gestures, and how this technology can be used to study speech. They will also present a development for managing interactions in Zoom, which aims to make online communication easier and more efficient.